
    Parameterized Complexity of Edge Interdiction Problems

    We study the parameterized complexity of interdiction problems in graphs. For an optimization problem on graphs, one can formulate an interdiction problem as a game between two players, an interdictor and an evader, who compete on an objective with opposing interests. In edge interdiction problems, every edge of the input graph has an interdiction cost associated with it; the interdictor interdicts the graph by modifying its edges, and the number of such modifications is constrained by the interdictor's budget. The evader then solves the given optimization problem on the modified graph. The interdictor's actions must impede the evader as much as possible. We focus on edge interdiction problems related to minimum spanning tree, maximum matching, and shortest paths, which arise in various real-world scenarios. We derive several fixed-parameter tractability and W[1]-hardness results for these interdiction problems with respect to various parameters. We then show a close relation between interdiction problems and partial cover problems on bipartite graphs, where the goal is not to cover all elements but to minimize or maximize the number of covered elements with a specified number of sets. Accordingly, we also investigate the parameterized complexity of several partial cover problems on bipartite graphs.
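
    As a concrete illustration of the shortest-path variant of this setting (not the algorithms studied in the paper), the following brute-force sketch assumes unit interdiction costs and simply enumerates all edge subsets within the interdictor's budget; it is exponential in the number of edges and meant only to make the interdictor/evader interaction tangible.

```python
# Brute-force shortest-path edge interdiction sketch (unit costs assumed):
# the interdictor removes at most `budget` edges to make the evader's
# shortest s-t path as long as possible. For illustration only.
from itertools import combinations
import math
import networkx as nx


def interdict_shortest_path(G, s, t, budget):
    """Return (best_length, removed_edges) maximizing the evader's
    shortest s-t path after removing at most `budget` edges."""
    best_len, best_removed = -math.inf, ()
    edges = list(G.edges())
    for b in range(budget + 1):
        for removed in combinations(edges, b):
            H = G.copy()
            H.remove_edges_from(removed)
            try:
                length = nx.shortest_path_length(H, s, t, weight="weight")
            except nx.NetworkXNoPath:
                length = math.inf  # the evader is fully blocked
            if length > best_len:
                best_len, best_removed = length, removed
    return best_len, best_removed


if __name__ == "__main__":
    G = nx.Graph()
    G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                               ("s", "b", 2), ("b", "t", 2)])
    print(interdict_shortest_path(G, "s", "t", budget=1))
```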

    Employing Machine Learning to Advance Agent-based Modeling in Information Systems Research

    In recent years, computationally intensive theory construction, leveraging big data and machine learning (ML), has gained significant interest in the information systems (IS) community. The integration of computational methods can generate novel methodological paradigms or enhance existing methods. Agent-based modeling (ABM) is one of the computational methods that has recently proliferated in IS research to generate computationally intensive theories. However, ABM is still in a nascent state of adoption in IS research and entails some pathological challenges that limit its applicability and robustness. With the goal of advancing ABM in IS research, this article proposes a methodological framework that integrates ML within the relevant steps of ABM. The framework is demonstrated in an exemplary IS study, showing its potential for addressing the pathological challenges of ABM. We conclude by discussing the implications of applying the proposed methodological framework in IS research.
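
    The abstract does not spell out the framework's steps, but one common way to embed ML in an ABM step is to replace a hand-crafted agent decision rule with a model trained on observed behaviour. The toy sketch below illustrates that idea only; the data, features, and decision rule are hypothetical and not taken from the paper.

```python
# Toy sketch: a classifier trained on synthetic "observed" adoption decisions
# serves as the agents' decision rule inside a simple diffusion ABM loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. "Empirical" data: feature = peer adoption share, label = adopt (1) or not.
X = rng.uniform(0, 1, size=(500, 1))
y = (X[:, 0] + rng.normal(0, 0.2, 500) > 0.5).astype(int)
decision_model = LogisticRegression().fit(X, y)

# 2. Simulation: each agent decides based on the learned rule rather than a
#    hand-crafted threshold.
n_agents = 100
adopted = np.zeros(n_agents, dtype=bool)
adopted[:5] = True  # seed adopters
for _ in range(20):
    share = adopted.mean()
    p_adopt = decision_model.predict_proba([[share]])[0, 1]
    adopted |= rng.uniform(size=n_agents) < p_adopt

print(f"final adoption share: {adopted.mean():.2f}")
```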

    Large Language Models for Difficulty Estimation of Foreign Language Content with Application to Language Learning

    We use large language models to help learners enhance their proficiency in a foreign language. This is accomplished by identifying content on topics that the user is interested in and that closely aligns with the learner's proficiency level in that foreign language. Our work centers on French content, but our approach is readily transferable to other languages. Our solution offers several distinctive characteristics that differentiate it from existing language-learning solutions, such as: a) the discovery of content across topics that the learner cares about, thus increasing motivation; b) a more precise estimation of the linguistic difficulty of the content than traditional readability measures; and c) the availability of both textual and video-based content. The linguistic complexity of video content is derived from the video captions. It is our aspiration that such technology will enable learners to remain engaged in the language-learning process by continuously adapting the topics and the difficulty of the content to align with the learners' evolving interests and learning objectives.
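
    The paper's prompts and models are not given in the abstract, but a minimal sketch of LLM-based difficulty estimation might look like the following. `llm_complete` is a hypothetical stand-in for whatever completion API is used; the CEFR-style prompt and the fallback level are assumptions for illustration.

```python
# Hedged sketch: ask an LLM to place a French passage on the CEFR scale.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]


def estimate_difficulty(text: str, llm_complete) -> str:
    """Return an estimated CEFR level for a French passage."""
    prompt = (
        "You are a French language teacher. Rate the linguistic difficulty "
        "of the following passage on the CEFR scale (A1, A2, B1, B2, C1, C2). "
        "Answer with the level only.\n\n"
        f"Passage:\n{text}"
    )
    answer = llm_complete(prompt).strip().upper()
    # Fall back to the middle of the scale if the reply is malformed.
    return answer if answer in CEFR_LEVELS else "B1"


# Example with a dummy backend that always answers "B2"; for video content,
# the same function would be applied to the caption text.
print(estimate_difficulty(
    "Le réchauffement climatique menace les glaciers alpins.",
    lambda prompt: "B2"))
```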

    Data Augmentation through Pseudolabels in Automatic Region Based Coronary Artery Segmentation for Disease Diagnosis

    Coronary artery diseases (CADs), though preventable, are among the leading causes of death and disability. Diagnosis of these diseases is often difficult and resource intensive. Segmentation of arteries in angiographic images has evolved into an assistive tool, helping clinicians make accurate diagnoses. However, due to the limited amount of data and the difficulty of curating a dataset, the segmentation task has proven challenging. In this study, we introduce the idea of using pseudolabels as a data augmentation technique to improve the performance of the baseline YOLO model. This method increases the F1 score of the baseline by 9% on the validation dataset and by 3% on the test dataset. (arXiv admin note: text overlap with arXiv:2310.0474)
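
    As a schematic of the general pseudolabeling technique named in the abstract (not the authors' exact pipeline), the sketch below runs a baseline model over unlabeled images, keeps only confident predictions as pseudolabels, and merges them with the labeled training set. The `predict` callable, the `Sample` container, and the confidence threshold are hypothetical stand-ins for the concrete detection/segmentation framework.

```python
# Schematic pseudolabel-based data augmentation sketch.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Sample:
    image: str            # path to the angiogram
    label: object = None  # region annotation (None for unlabeled data)


def pseudolabel_augment(labeled: List[Sample],
                        unlabeled: List[Sample],
                        predict: Callable[[str], Tuple[object, float]],
                        conf_threshold: float = 0.5) -> List[Sample]:
    """Keep confident baseline predictions on unlabeled images as
    pseudolabels and merge them with the labeled set."""
    augmented = list(labeled)
    for sample in unlabeled:
        label, confidence = predict(sample.image)
        if confidence >= conf_threshold:
            augmented.append(Sample(sample.image, label))
    return augmented


# Usage with a dummy predictor that is always 60% confident:
train_set = pseudolabel_augment(
    labeled=[Sample("img_001.png", label="RCA")],
    unlabeled=[Sample("img_002.png"), Sample("img_003.png")],
    predict=lambda path: ("LAD", 0.6),
)
print(len(train_set))  # 3: one labeled image plus two pseudolabeled ones
```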

    DIGITAL TRACE DATA RESEARCH IN INFORMATION SYSTEMS: OPPORTUNITIES AND CHALLENGES

    Digital trace data research is an emerging paradigm in Information Systems (IS). Whether for theory development or theory testing, IS scholars increasingly draw on data that are generated as actors use information technology. Because they are ‘digital’ in nature, these data are particularly suitable for computational analysis, i.e., analysis with the aid of algorithms. In turn, this opens up new possibilities for data analysis, such as process mining, text mining, and network analysis. At the same time, the increasing use of digital trace data for research purposes raises questions and potential issues that the research community needs to address. For example, one key question is what constitutes a valid contribution to the body of knowledge, and how digital trace data research influences our collective identity as a field. In this panel, we will discuss opportunities and challenges associated with digital trace data research. Reflecting on the panelists’ and the audience’s experience, we will point to strategies for mitigating common pitfalls and outline promising research avenues.
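
    To make the mention of network analysis concrete, a minimal sketch of one such computational analysis is shown below; the event log, actors, and field names are entirely hypothetical and serve only to illustrate turning digital traces into an interaction network.

```python
# Small sketch: build an interaction network from a digital trace log and
# rank users by degree centrality.
import networkx as nx

# Each trace record: (actor, recipient) of a platform interaction.
trace_log = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("carol", "alice"),
]

G = nx.DiGraph()
G.add_edges_from(trace_log)

# Who occupies central positions in the interaction network?
centrality = nx.degree_centrality(G)
for user, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.2f}")
```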

    Kernelization and Parameterized Complexity of Star Editing and Union Editing
